Mastering NLP from Foundations to LLMs by Lior Gazit and Meysam Ghaffari

Authors: Lior Gazit, Meysam Ghaffari
Language: eng
Format: epub
Publisher: Packt Publishing Pvt Ltd
Published: 2024-04-04T00:00:00+00:00


Fine-tuning

You use the pretrained model as a starting point and update all or some of its parameters for your new task. In other words, you continue training where it left off, allowing the model to adjust from generic feature extraction to features more specific to your task. A lower learning rate is often used during fine-tuning so that the previously learned features are not overwritten entirely.
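For instance, here is a minimal sketch of fine-tuning with the Hugging Face transformers and datasets libraries (an assumed setup, not taken from this chapter); the bert-base-uncased checkpoint, the IMDB dataset, and the hyperparameters are illustrative choices only:

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from a pretrained checkpoint and attach a new classification head.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# An illustrative labeled dataset for the new task.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

# A low learning rate (2e-5) keeps the updates small, so the pretrained
# features are adjusted for the new task rather than overwritten.
args = TrainingArguments(
    output_dir="finetuned-model",
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()

Here, continuing training from the pretrained weights with a small learning rate is exactly the fine-tuning step described above: all parameters are updated, but only gently.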

Transfer learning is a powerful technique for improving the performance of ML models. It is particularly useful for tasks where only limited labeled data is available, and it is commonly used in DL applications. For instance, it is almost standard practice in image classification to start from models pretrained on ImageNet, a large-scale annotated image dataset; examples include ResNet, VGG, and Inception. The features learned by these models are generic to image classification and can be fine-tuned on a specific image classification task with a smaller amount of data.
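As an illustration of that ImageNet workflow, here is a minimal sketch using PyTorch and torchvision (an assumed setup, not this book's own code; the 10 target classes are a placeholder). A ResNet pretrained on ImageNet is loaded, its layers are frozen, and only a newly added classification head is trained on the smaller dataset:

import torch.nn as nn
from torch.optim import Adam
from torchvision import models

# Load a ResNet pretrained on ImageNet to reuse its generic visual features.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained layers so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with one sized for the new task;
# the new layer's parameters are trainable by default.
num_classes = 10  # placeholder for the number of classes in the new task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize only the new head on the smaller, task-specific dataset.
optimizer = Adam(model.fc.parameters(), lr=1e-3)

Unfreezing some or all of the pretrained layers and training them with a lower learning rate turns this feature-extraction setup into full fine-tuning.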

Here are some examples of how transfer learning can be used:

A model trained to classify images of cats and dogs can be fine-tuned to classify images of other animals, such as birds or fish

A model trained to translate text from English to Spanish can be fine-tuned to translate text from Spanish to French

A model trained to predict the price of a house can be fine-tuned to predict the price of a car


